
Conversation


@cairijun cairijun commented Dec 18, 2025

Description

As discussed in #2506, we want a basic MCP client implemented in CodeCompanion.

Core design:

  • MCP clients are started on first chat creation.
  • Tools from MCP servers are loaded as CodeCompanion tools.
  • Tools from the same MCP server are grouped into a tool group.
  • Users can use @{mcpServerName} to grant the LLM access to a server's tools.

Features:

| Category  | Feature                        | Supported | Notes |
| --------- | ------------------------------ | --------- | ----- |
| Transport | Stdio                          | Yes       | |
| Transport | Streamable HTTP                | No        | No plan; can use mcp-remote |
| Basic     | Cancellation                   | Partial   | Only triggered on timeout |
| Basic     | Progress                       | No        | Possible future improvement |
| Basic     | Task                           | No        | No plan; too complex and marked experimental in the Spec |
| Client    | Roots                          | Yes       | Disabled by default |
| Client    | Sampling                       | No        | No plan; needs concrete examples to justify |
| Client    | Elicitation                    | No        | No plan; needs concrete examples |
| Server    | Tools                          | Yes       | Currently only supports Text Content |
| Server    | Tool list changed notification | No        | No plan; needs concrete examples to demonstrate necessity |
| Server    | Prompts                        | No        | Planned; needs concrete examples to help design UX |
| Server    | Resources                      | No        | Planned |
| Server    | Completion                     | No        | No plan; depends on Prompts UX |
| Server    | Logging                        | Partial   | Currently implemented by forwarding JSON-RPC messages to log:info |
| Server    | Pagination                     | Yes       | |

Config example:

local config = { ... }
config.mcp.servers = {
  -- a simple case
  tavilyMcp = {
    cmd = { "npx", "-y", "tavily-mcp@latest" },
    env = {
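      -- "cmd:" prefix: the value is resolved by running the given command and using its output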
      TAVILY_API_KEY = "cmd:cat ~/.config/tavily_api_key",
    },
  },
  -- workspace roots reporting
  filesystem = {
    cmd = { "npx", "-y", "@modelcontextprotocol/server-filesystem" },
    roots = function()
      local roots =
        { { name = "Current Working Directory", uri = vim.uri_from_fname(vim.fn.getcwd()) } }
      vim
        .iter(vim.lsp.get_clients())
        :map(function(client)
          return client.workspace_folders
        end)
        :flatten()
        :each(function(folder)
          table.insert(roots, { name = "LSP Workspace", uri = folder.uri })
        end)
      return roots
    end,
    register_roots_list_changed = function(notify)
      vim.api.nvim_create_autocmd("LspAttach", {
        callback = function()
          notify()
        end,
      })
    end,
  },
  -- override output handlers
  sequentialThinking = {
    cmd = { "npx", "-y", "@modelcontextprotocol/server-sequential-thinking" },
    tool_overrides = {
      sequentialthinking = {
        output = {
          success = function(self, tools, cmd, stdout)
            local output = stdout and stdout[#stdout]
            local msg = "Sequential thinking: " .. self.args.thought
            tools.chat:add_tool_output(self, output, msg)
          end,
        },
      },
    },
  },
  -- a more complex example
  runPython = {
    cmd = { "uvx", "mcp-run-python", "--deps", "numpy,pandas", "stdio" },
    -- override server-level prompt
    server_instructions = function(orig)
      return orig
        .. "\nThe Python environment has the following packages pre-installed: numpy, pandas."
        .. " You can only use these packages and standard library modules."
    end,
    -- set default tool opts
    default_tool_opts = {
      require_approval_before = true,
    },
    tool_overrides = {
      run_python_code = {
        timeout_ms = 5 * 60 * 1000, -- 5 minutes
        output = {
          prompt = function(self)
            local args = self.args
            return string.format(
              "Confirm to execute the following Python code:\n```python\n%s\n```\nGlobal Variables:\n%s",
              args.python_code,
              vim.inspect(args.global_variables)
            )
          end,
        },
      },
    },
  },
}

require("codecompanion").setup(config)

Related Issue(s)

#2506

Screenshots

(two screenshots attached)

Checklist

  • I've read the contributing guidelines and have adhered to them in this PR
  • I've added test coverage for this fix/feature
  • I've run make all to ensure docs are generated, tests pass and my formatting is applied
  • (optional) I've updated CodeCompanion.has in the init.lua file for my new feature
  • (optional) I've updated the README and/or relevant docs pages

@olimorris olimorris added the P3 Low impact, low urgency label Dec 18, 2025
@olimorris
Owner

Fantastic, thank you @cairijun. The holidays start from Monday for me, so I will get around to properly testing this out, then. But this will be priority 1 for me.

Owner

@olimorris olimorris left a comment


Firstly, this is awesome and I'm very excited about this making its way into CodeCompanion.

I've done a first pass review. Nothing major, lots of nitpicking. Then I'll give it some proper testing over the holiday period.

Giving some thought to the UX, I think there should be a clear distinction between MCP tools and regular tools: the former should be visible in the chat buffer as @{mcp.runPython}. It makes it easier for users to filter via their completion plugin and makes the distinction obvious.

Seeing how you've implemented the MCP servers:

sequentialThinking = {
    cmd = { "npx", "-y", "@modelcontextprotocol/server-sequential-thinking" },
    tool_overrides = {
      sequentialthinking = {
        output = {
          success = function(self, tools, cmd, stdout)
            local output = stdout and stdout[#stdout]
            local msg = "Sequential thinking: " .. self.args.thought
            tools.chat:add_tool_output(self, output, msg)
          end,
        },
      },
    },
  },

is brilliant 👏🏼. The tool_bridge.lua file is an excellent idea. Ideally, we wouldn't have sequentialthinking appear twice in the same config block, but this can be tackled towards the end of the PR.

Seeing success = function(self, tools, cmd, stdout)... this is on me to refactor. I'm trying to move everything over to the success = function(self, args) pattern, as it's future-proof should we add additional arguments. I realized this pattern far too late. So heads up, this may change in the future.
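
For illustration, the override above might end up looking something like this under that pattern (a rough sketch; exactly which fields end up inside args is still to be decided):

sequentialthinking = {
  output = {
    -- hypothetical future signature: extra values arrive in a single `args` table
    success = function(self, args)
      local output = args.stdout and args.stdout[#args.stdout]
      local msg = "Sequential thinking: " .. self.args.thought
      args.tools.chat:add_tool_output(self, output, msg)
    end,
  },
},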

Once again, thanks for this brilliant PR.

local M = {}

---Default tool output callbacks
local DefaultOutputCallbacks = {}
Owner


This should be snake case.

Also, using Default implies that there is an alternative for these callbacks, something other than the defaults.

Author


We indeed have alternatives: the user can override some of the callbacks via the tool_overrides.{TOOL_NAME}.output config table. I've changed it to local default_output = { ... }.
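
Roughly speaking (a simplified sketch, not the exact code), the per-tool override is merged on top of the defaults, with the override winning:

-- `resolve_output` is an illustrative helper name, not the actual function
local function resolve_output(server_config, tool_name)
  -- per-tool overrides from `tool_overrides.{TOOL_NAME}.output`, if any
  local overrides = vim.tbl_get(server_config, "tool_overrides", tool_name, "output") or {}
  -- user-provided callbacks take precedence over the defaults
  return vim.tbl_deep_extend("force", vim.deepcopy(default_output), overrides)
end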

Copilot AI review requested due to automatic review settings December 25, 2025 10:20
Contributor

Copilot AI left a comment


Pull request overview

This PR implements a basic MCP (Model Context Protocol) client for CodeCompanion, enabling integration with MCP servers to expose their tools as CodeCompanion tools. The implementation supports stdio transport, tool pagination, roots capability, and customizable tool behavior.

Key changes:

  • Core MCP client with JSON-RPC communication over stdio transport
  • Tool bridge to convert MCP tools into CodeCompanion tool format with grouping by server
  • Comprehensive test coverage with mock transport for unit and integration testing

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 12 comments.

| File | Description |
| ---- | ----------- |
| tests/stubs/mcp/tools.jsonl | Test fixtures containing sample MCP tool definitions for testing |
| tests/mocks/mcp_client_transport.lua | Mock transport implementation enabling testable JSON-RPC communication without external processes |
| tests/mcp/test_mcp_client.lua | Unit tests for MCP client initialization, tool loading, tool calls, timeouts, and roots capability |
| tests/interactions/chat/mcp/test_mcp_tools.lua | Integration tests verifying MCP tools work within chat interactions with tool overrides |
| tests/config.lua | Adds empty MCP config section to test configuration |
| lua/codecompanion/types.lua | Type definitions for MCP protocol structures (JSON-RPC, tools, content blocks) |
| lua/codecompanion/mcp/tool_bridge.lua | Converts MCP tool specifications to CodeCompanion tools with customizable output handlers |
| lua/codecompanion/mcp/init.lua | Module entry point that starts all configured MCP servers on first chat creation |
| lua/codecompanion/mcp/client.lua | Main client implementation handling transport, JSON-RPC, initialization, and tool operations |
| lua/codecompanion/interactions/chat/tools/init.lua | Updates JSON decode to handle null values properly for MCP tool arguments |
| lua/codecompanion/interactions/chat/init.lua | Integrates MCP startup into chat initialization flow |
| lua/codecompanion/config.lua | Adds default MCP configuration with empty servers table |


@cairijun
Author

cairijun commented Dec 25, 2025

Hi @olimorris, thanks for your detailed and constructive comments! I've been a bit busy with some personal matters recently, so I'm replying more slowly than usual.

I've pushed some commits addressing most of the comments.

The generated tool groups are now named mcp.{SERVER_NAME}, while the tools are named mcp_{SERVER_NAME}_{ORIG_TOOL_NAME}, without the use of ., as it is not supported by many LLM providers. This inconsistency is not visible to users, as they always use the groups and the underlying tools are hidden.
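
For example (purely to illustrate the naming scheme):

local server_name, orig_tool_name = "runPython", "run_python_code"
local group_name = ("mcp.%s"):format(server_name)                    -- users reference this as @{mcp.runPython}
local tool_name = ("mcp_%s_%s"):format(server_name, orig_tool_name)  -- the hidden tool name sent to the LLM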

There's also a major change: I've moved the output formatting logic from cmds to success/error. The user-provided output handlers now receive the original MCP tool result content objects. This makes the handlers a bit more verbose for simple cases, but much more flexible for complex tool results. I think that's acceptable, since you normally don't need to override them for simple cases.
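
As a rough sketch (parameter names are illustrative rather than the exact signature), the sequentialthinking override from the description would now work on the raw content blocks:

sequentialthinking = {
  output = {
    success = function(self, tools, cmd, content)
      -- `content` stands in for the MCP result content blocks,
      -- e.g. { { type = "text", text = "..." } }; only text blocks are handled here
      local parts = {}
      for _, block in ipairs(content or {}) do
        if block.type == "text" then
          table.insert(parts, block.text)
        end
      end
      tools.chat:add_tool_output(self, table.concat(parts, "\n"), "Sequential thinking: " .. self.args.thought)
    end,
  },
},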

@olimorris
Owner

No worries @cairijun and no rush. I too have been in the same boat. We'll definitely get this PR into CodeCompanion 👍🏼. I will be picking this back up next week too.

@olimorris
Owner

I've got some time planned for this weekend to take a look at this. Looking forward to it.

@olimorris olimorris dismissed their stale review January 11, 2026 12:14

Completed

Owner

@olimorris olimorris left a comment


Initial reaction:

This is awesome. Really, really awesome. The fact that all I needed to do to get this working was:

      mcp = {
        servers = {
          ["tavily-mcp"] = {
            cmd = { "npx", "-y", "tavily-mcp@latest" },
            env = {
              TAVILY_API_KEY = "cmd:op read op://personal/Tavily_API/credential --no-newline",
            },
          },
        },
      },

...is tremendous.

I've put some inline comments. Pretty much minor stuff now. I will write the docs and add some more test coverage (I think we might need a bit of coverage around MCP servers appearing in the completion list).

There's a couple of (hopefully) small hurdles I've noticed:

  • In my example above, I'm being prompted by 1Password every time I create a new chat buffer. I may not want to add the Tavily MCP server to a given chat buffer, so I think this prompt should only appear once I've pressed submit to send my message to the LLM.
  • It can be hit-and-miss whether the MCP servers appear in the completion list in the chat buffer. I notice that when I open a fresh Neovim and create a chat buffer, I often don't see the MCP servers; if I open a second chat buffer, I do see them. I think we may need to shift the logic for creating the MCP groups into the providers.completion.init file, like we do for all other tools and groups.

Otherwise, this is amazing and I can't wait to merge this into CodeCompanion. No rush and take your time. We all have lives outside of open-source. And, if you need me to pickup any bits, just say.

Owner


There's a couple of LSP warnings in this file that are worth addressing.

@@ -0,0 +1,194 @@
local log = require("codecompanion.utils.log")

local CONSTANTS = {
Owner


I've extracted some messaging to the top of the file as I'm likely to tweak this over time. Also, my preference is to have @{mcp:tavily} rather than @{mcp.tavily}, as I think it visually clarifies that tavily is part of the MCP group (super minor nitpick).

table.insert(server_prompt, server_instructions)
end

chat_tools.groups[fmt("%s%s", CONSTANTS.TOOL_PREFIX, client.name)] = {
Owner


I've noticed that MCP tools appearing in the completion menu in the chat buffer can be temperamental. I'm assuming there's a timing difference between when these are resolved in the chat buffer and when the providers.completion.init file resolves them. I do think that logic should be moved to the latter file.
